
Value at Risk

Definition

Value at risk (VaR) is a very popular measure of the risk of loss on a portfolio of financial assets. For a given

  • portfolio,
  • time horizon, and
  • probability $p$,

the $100p\%$ VaR is defined as a threshold loss value such that the probability that the loss on the portfolio over the given time horizon exceeds this value is $p$.

Given a confidence level $\alpha \in (0,1)$, the VaR of the portfolio at the confidence level $\alpha$ is given by the smallest number $l$ such that the probability that the loss $L$ exceeds $l$ is at most $(1-\alpha)$. Mathematically, if $L$ is the loss of a portfolio, then $\operatorname{VaR}_{\alpha}(L)$ is the level $\alpha$-quantile, i.e.

$$\operatorname{VaR}_\alpha(L)=\inf\{l \in \mathbb{R}:P(L>l)\le 1-\alpha\}=\inf\{l\in \mathbb{R}:F_L(l) \ge \alpha\}.$$
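In sample terms, the VaR at confidence level $\alpha$ is simply the empirical $\alpha$-quantile of the loss distribution. As a minimal, standalone illustration (using hypothetical simulated losses, not the portfolio analysed below):

import numpy as np

losses = np.random.standard_normal(10000)     # hypothetical simulated losses L
alpha = 0.99                                  # confidence level
var_99 = np.percentile(losses, 100 * alpha)   # smallest l with F_L(l) >= alpha, empirically
print(var_99)                                 # close to 2.33 for standard normal losses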

Visually:


In [53]:
from IPython.display import Image
Image(url="http://upload.wikimedia.org/wikipedia/commons/6/64/VaR_diagram.JPG")


Out[53]:
[Image: VaR diagram from Wikimedia Commons]
In practice

In order to compute a VaR number, one needs a set of plausible scenarios. Some possibilities for generating the scenarios are:

  • Monte Carlo simulation
  • Historical simulation

We consider a type of historical simulation called Filtered Historical Simulation (FHS). The main characteristic of FHS is that it filters the historical returns to remove the effect of the local market conditions under which they were observed, so that the scenario generation can be adapted to current conditions.

Filtered Historical Simulation (FHS)


In [54]:
import pandas as pd
import numpy as np
import pandas.io.data as web   # in newer pandas versions this lives in the separate pandas_datareader package
import datetime

In [55]:
%matplotlib inline

In [56]:
from IPython.html import widgets   # in newer Jupyter versions: import ipywidgets as widgets
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rcParams['figure.figsize'] = (10, 4)
import seaborn as sns

We consider a portfolio consisting of a single stock, AAPL. Its daily closing price is shown in the figure below.


In [57]:
stock_name = "AAPL"
f = web.DataReader(stock_name, 'yahoo', datetime.datetime(2011,1,1), datetime.datetime(2014,11,28))

In [58]:
_ax = f['Adj Close'].plot(figsize=(10,4))
_ax.set_title(stock_name + " daily closing prices")
_t = _ax.set_ylabel('Price (USD)')


We are interested in the daily price changes, i.e. the daily returns; more specifically, the daily log-returns

$ r_t = \log\left( \frac{p_t}{p_{t-1}} \right)$

As the histogram below shows, they follow a unimodal distribution.


In [59]:
stock = f['Adj Close']
rets = np.log(stock / stock.shift(1))
rets = rets.ix[1:]   # drop the first (NaN) return; in newer pandas use .iloc[1:]
_ax = rets.hist(bins=100)
_ax.set_title('AAPL daily log-returns histogram')
_t = _ax.set_ylabel('Absolute frequency')


and evolve in time as follows:


In [60]:
_ax = rets.plot()
_ax.set_ylabel('Log-return value')
_t = _ax.set_title('AAPL daily log-returns evolution')


We are going to

  1. filter the returns in order to make them (approximately) independent of the local volatility,

  2. use the filtered returns to generate a set of one-day scenarios, i.e. projections of the possible price evolution based on the history of the stock, and

  3. determine the VaR as a percentile of the distribution of the computed scenarios.

The bootstrapped FHS method requires the observations to be approximately independent and identically distributed. However, most financial return series exhibit some degree of autocorrelation and, more importantly, heteroskedasticity.

For example, the sample autocorrelation function (ACF) of the portfolio returns reveals some mild serial correlation.


In [61]:
from pandas.tools.plotting import autocorrelation_plot   # in newer pandas: from pandas.plotting import autocorrelation_plot

In [62]:
_ax = plt.gca()
_ax.set_ylim((-0.4, 0.4))
autocorrelation_plot(rets, ax=_ax)   # sample ACF of the daily log-returns


Out[62]:
<matplotlib.axes._subplots.AxesSubplot at 0x11de5c150>

To produce a series of approximately i.i.d. observations, one can fit a first-order autoregressive (AR(1)) model to the conditional mean of the portfolio returns and then model the conditional volatility of its residuals; a sketch of such an AR(1) fit is given below. In this notebook the emphasis is on the volatility filtering, described in the next section.
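As a minimal sketch of the AR(1) step, assuming the statsmodels package (not used elsewhere in this notebook) is available; the variable names are illustrative and the block is not executed here:

from statsmodels.tsa.ar_model import AutoReg

# fit r_t = c + phi * r_{t-1} + e_t to the daily log-returns computed above
ar1 = AutoReg(rets.values, lags=1).fit()
print(ar1.params)            # estimated intercept c and AR coefficient phi
mean_residuals = ar1.resid   # residuals e_t, which would then be volatility-filtered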

Exponentially Weighted Standard Deviation

We estimate the local volatility by means of the exponentially weighted standard deviation, given by

$$\sigma^2_t = (1 - \lambda) r_t^2 + \lambda \sigma^2_{t-1}$$

where $\lambda \in (0,1)$ is a decay parameter that can be fitted to the data. A common choice is $\lambda = 0.94$, following JP Morgan's RiskMetrics recommendation: http://www.phy.pmf.unizg.hr/~bp/TD4ePt_2.pdf
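As a direct, illustrative implementation of this recursion (the cells below rely on pandas' built-in exponentially weighted functions instead):

# sketch: sigma2_t = (1 - lambda) * r_t**2 + lambda * sigma2_{t-1}
def ewma_variance(returns, lam=0.94, sigma2_0=None):
    r = np.asarray(returns, dtype=float)
    sigma2 = np.empty_like(r)
    sigma2[0] = r[0] ** 2 if sigma2_0 is None else sigma2_0   # simple initialisation choice
    for t in range(1, len(r)):
        sigma2[t] = (1 - lam) * r[t] ** 2 + lam * sigma2[t - 1]
    return sigma2

vol = np.sqrt(ewma_variance(rets.values, lam=0.94))   # local volatility estimate sigma_t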

The impact of the parameter $\lambda$ can be explored in the widget below, where we plot the estimated volatility together with the log-returns and their exponentially weighted moving average.


In [63]:
def mk_ewma_plot(_lambda):
    # pandas parameterizes its EWM functions by com, where alpha = 1 / (1 + com);
    # with com = (1 - _lambda) / _lambda we get alpha = _lambda, which plays the role
    # of (1 - lambda) in the formula above, i.e. the weight on the newest observation
    com = (1 - _lambda) / _lambda
    fig, axes = plt.subplots(nrows=2, ncols=1, figsize=(10, 7))
    _ax = pd.ewmstd(rets, com=com).plot(legend=True, ax=axes[0], label='volatility')   # newer pandas: rets.ewm(com=com).std()
    _ax = rets.plot(legend=True, ax=axes[1], label='log-returns')
    _ax = pd.ewma(rets, com=com).plot(legend=True, ax=axes[1], label='ewma')           # newer pandas: rets.ewm(com=com).mean()
    axes[0].xaxis.set_ticklabels([])
    axes[0].set_title('Estimated volatility')
    axes[0].set_ylabel('$\sigma_t$')
    axes[1].yaxis.set_label_text('Log-return')


widgets.interact(mk_ewma_plot, _lambda=(0.01, 0.29, 0.01))


Out[63]:
<function __main__.mk_ewma_plot>

Scenario generation

We compute the normalized returns (residuals) by subtracting the local mean estimate and dividing by the local volatility estimate,

$ \xi_t = \frac{r_t - \mu_t}{\sigma_t}$

and then rescale them with the most recent volatility estimate $\sigma_T$ to obtain the scenario returns

$ R_t = \exp( \xi_t \, \sigma_T ) $


In [64]:
_lambda = 0.09
_com = (1 - _lambda) / _lambda
_std = pd.ewmstd(rets, com=_com)   # local volatility sigma_t (newer pandas: rets.ewm(com=_com).std())
_mu = pd.ewma(rets, com=_com)      # local mean mu_t (newer pandas: rets.ewm(com=_com).mean())

# note: the mean term is not subtracted here; for daily log-returns it is close to zero
residuals = (rets / _std)
residuals = residuals.ix[20:]      # drop the first observations, where the EWM estimates are unstable
residuals.hist(bins=50)


Out[64]:
<matplotlib.axes._subplots.AxesSubplot at 0x11e1493d0>

The scenarios are generated using the normalized returns and the current price $P_T$, also known as the "neutral scenario".

$ S_t = R_t P_T $


In [65]:
scenario_shifts = np.exp(residuals * _std[-1])   # R_t: residuals rescaled with the latest volatility sigma_T

neutral_scenario = stock.last('1D').ix[0]        # current price P_T
scenarios = neutral_scenario * scenario_shifts   # S_t = R_t * P_T

In [66]:
def mydistplot(pnls):
    # plot the distribution with seaborn and return the axes together with the
    # x/y data of the fitted density curve, so that the tail can be shaded later
    _x = sns.distplot(pnls)
    _l = [_c for _c in _x.get_children() if _c.__class__ == mpl.lines.Line2D][0]
    xxx, yyy = (_l.get_xdata(), _l.get_ydata())
    return _x, xxx, yyy

The distribution of the scenarios and the corresponding tail area is shown in the figure below.


In [67]:
pnls = scenarios.ix[10:] - neutral_scenario   # one-day scenario P&Ls
alpha = 0.05                                  # tail probability, i.e. 1 - confidence level
var = -pnls.quantile(q=alpha)                 # VaR at the 95% confidence level
p, xxx, yyy = mydistplot(pnls)
mask = xxx <= -var
plt.fill_between(xxx[mask], 0, yyy[mask], facecolor='red', alpha=0.4)   # shade the left tail beyond -VaR


Out[67]:
<matplotlib.collections.PolyCollection at 0x11d5b1850>

In this case, the one-day 95% VaR of the portfolio, consisting of a single share of AAPL, is


In [68]:
var


Out[68]:
1.6535296948216627
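
For reference, the steps above can be condensed into a single function. The sketch below is not part of the original analysis: it uses the newer pandas ewm API (pd.ewmstd and pd.ewma were removed in later pandas versions), and it should give a value close to the one above, up to small differences between the old and new exponentially weighted implementations.

def fhs_var(prices, _lambda=0.09, alpha=0.05, burn_in=20):
    # one-day filtered historical simulation VaR for a single stock
    rets = np.log(prices / prices.shift(1)).dropna()   # daily log-returns r_t
    com = (1 - _lambda) / _lambda
    std = rets.ewm(com=com).std()                      # local volatility sigma_t
    residuals = (rets / std).iloc[burn_in:]            # filtered returns xi_t
    shifts = np.exp(residuals * std.iloc[-1])          # scenario returns R_t = exp(xi_t * sigma_T)
    neutral = prices.iloc[-1]                          # current price P_T, the neutral scenario
    pnls = neutral * shifts - neutral                  # one-day scenario P&Ls
    return -pnls.quantile(q=alpha)                     # VaR at the (1 - alpha) confidence level

# e.g. fhs_var(f['Adj Close'])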